Meta's ChatGPT competitor includes conversational voice chat and a social feed

Engadget

Meta didn't wait for Tuesday's LlamaCon keynote to unveil its first big AI announcement of the week. The company launched a standalone app that competes with ChatGPT, Gemini, Claude and other multimodal AI chatbots. Sticking to the company's roots, the app also includes a social feed and the ability to draw on info from your profile and posts you've shared. The Meta AI app offers similar features to rival chatbots, including text and voice chats, live web access and the ability to generate and edit images. But it also includes a Discover feed that (for better or worse) adds a social element to your AI queries.


GCondNet: A Novel Method for Improving Neural Networks on Small High-Dimensional Tabular Data

Margeloiu, Andrei, Simidjievski, Nikola, Lio, Pietro, Jamnik, Mateja

arXiv.org Artificial Intelligence

Neural network models often struggle with high-dimensional but small-sample-size tabular datasets. One reason is that current weight initialisation methods assume independence between weights, which can be problematic when there are too few samples to estimate the model's parameters accurately. In such small-data scenarios, leveraging additional structure can improve the model's performance and training stability. To address this, we propose GCondNet, a general approach that enhances neural networks by leveraging implicit structures present in tabular data. We create a graph between samples for each data dimension and use Graph Neural Networks (GNNs) to extract this implicit structure and to condition the parameters of the first layer of an underlying predictor network. By creating many small graphs, GCondNet exploits the data's high-dimensionality and thereby improves the predictor's performance. We demonstrate the effectiveness of our method on 9 real-world datasets, where GCondNet outperforms 15 standard and state-of-the-art methods. The results show that GCondNet is a versatile framework for injecting graph-regularisation into various types of neural networks, including MLPs and tabular Transformers.
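The "one graph per data dimension" idea can be illustrated with a minimal sketch. Note the k-nearest-neighbour construction and the helper names below are illustrative assumptions; the paper's actual graph construction and the GNN conditioning step are its own contribution:

```python
# Sketch: build one graph over *samples* for each feature dimension,
# linking each sample to its k nearest samples in that single dimension.
# (Illustrative only -- not the authors' exact procedure.)

def knn_edges_1d(values, k=2):
    """Edges (i, j) linking each sample i to its k nearest samples,
    where distance is measured in one feature dimension only."""
    edges = []
    for i, v in enumerate(values):
        dists = sorted(
            (abs(v - w), j) for j, w in enumerate(values) if j != i
        )
        for _, j in dists[:k]:
            edges.append((i, j))
    return edges

def per_dimension_graphs(X, k=2):
    """One small graph per column of the n_samples x n_features matrix X."""
    n_features = len(X[0])
    return [
        knn_edges_1d([row[d] for row in X], k) for d in range(n_features)
    ]

# Toy data: 4 samples, 3 dimensions -> 3 small graphs.
X = [[0.1, 5.0, 9.0],
     [0.2, 4.8, 1.0],
     [0.9, 0.3, 8.8],
     [1.0, 0.1, 1.2]]
graphs = per_dimension_graphs(X, k=1)
```

With many feature dimensions this yields many small graphs, which is the high-dimensionality the abstract says GCondNet exploits.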


CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and Feature Mapping

Malla, Srikanth, Chen, Yi-Ting

arXiv.org Artificial Intelligence

Point cloud data plays an essential role in robotics and self-driving applications. Annotated point clouds enable learning discriminative 3D representations that empower downstream tasks such as classification and segmentation, yet annotating point cloud data is time-consuming and nontrivial. Recently, contrastive learning-based frameworks have shown promising results for learning 3D representations in a self-supervised manner. However, existing contrastive learning methods cannot precisely encode and associate structural features, nor search the higher-dimensional augmentation space efficiently. In this paper, we present CLR-GAM, a novel contrastive learning-based framework with Guided Augmentation (GA) for an efficient dynamic exploration strategy and Guided Feature Mapping (GFM) for associating similar structural features between augmented point clouds. We empirically demonstrate that the proposed approach achieves state-of-the-art performance on both simulated and real-world 3D point cloud datasets for three different downstream tasks, i.e., 3D point cloud classification, few-shot learning, and object part segmentation.


Employing an Adjusted Stability Measure for Multi-Criteria Model Fitting on Data Sets with Similar Features

Bommert, Andrea, Rahnenführer, Jörg, Lang, Michel

arXiv.org Machine Learning

Fitting models with high predictive accuracy that include all relevant but no irrelevant or redundant features is a challenging task on data sets with similar (e.g. highly correlated) features. We propose the approach of tuning the hyperparameters of a predictive model in a multi-criteria fashion with respect to predictive accuracy and feature selection stability. We evaluate this approach based on both simulated and real data sets and we compare it to the standard approach of single-criteria tuning of the hyperparameters as well as to the state-of-the-art technique "stability selection". We conclude that our approach achieves the same or better predictive performance compared to the two established approaches. Considering the stability during tuning does not decrease the predictive accuracy of the resulting models. Our approach succeeds at selecting the relevant features while avoiding irrelevant or redundant features. The single-criteria approach fails at avoiding irrelevant or redundant features and the stability selection approach fails at selecting enough relevant features for achieving acceptable predictive accuracy. For our approach, for data sets with many similar features, the feature selection stability must be evaluated with an adjusted stability measure, that is, a measure that considers similarities between features. For data sets with only few similar features, an unadjusted stability measure suffices and is faster to compute.


An Introduction to Machine Learning - Notes on New Technologies

#artificialintelligence

Humans learn from past experience; machines follow the instructions given by humans. But what if humans could train machines to learn from past experience (data) and act much faster? This is the concept of machine learning. Machine learning is the field of study that gives computers the capability to learn without being explicitly programmed. Machine learning algorithms build a mathematical model from data, known as training data, in order to make predictions or decisions. Machine learning is not only about learning, but also about understanding and reasoning. A machine learning model is not programmed; it is taught with data.
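The point that a model is "taught with data" rather than programmed can be shown with a deliberately simple sketch: a 1-nearest-neighbour classifier, where "training" is just remembering past examples and prediction reuses that remembered experience:

```python
# A 1-nearest-neighbour classifier: "training" is just storing the data,
# and prediction reuses past experience (the stored examples).

def train(examples):
    """examples: list of (features, label) pairs -- the 'training data'."""
    return list(examples)  # the model is simply the remembered data

def predict(model, features):
    """Return the label of the stored example closest to the query point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda ex: dist(ex[0], features))
    return label

# Teach the model with two labelled examples, then query it.
model = train([((1.0, 1.0), "cat"), ((5.0, 5.0), "dog")])
```

No rule for telling cats from dogs was ever written down; the behaviour comes entirely from the examples the model was given.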


Study couples do not grow to look alike but finds people choose those with similar facial features

Daily Mail - Science & tech

It is a phenomenon that has fascinated scientists for decades: couples tend to look alike over time. The idea first surfaced in 1987, when researchers from the University of Michigan proposed that years of shared emotions resulted in closer resemblance due to similar wrinkles and expressions. Now, a team from Stanford University has taken another look at the theory and found that people do not grow to look like their significant other, but choose them because of their similar facial features. The findings suggest that people search for a mate with the same facial features just as they do when it comes to finding a mate with the same values and personality traits.


Adjusted Measures for Feature Selection Stability for Data Sets with Similar Features

Bommert, Andrea, Rahnenführer, Jörg

arXiv.org Machine Learning

For data sets with similar features, for example highly correlated features, most existing stability measures behave in an undesired way: They consider features that are almost identical but have different identifiers as different features. Existing adjusted stability measures, that is, stability measures that take into account the similarities between features, have major theoretical drawbacks. We introduce new adjusted stability measures that overcome these drawbacks. We compare them to each other and to existing stability measures based on both artificial and real sets of selected features. Based on the results, we suggest using one new stability measure that considers highly similar features as exchangeable.
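The difference between an unadjusted and an adjusted stability score can be sketched in a few lines. The matching rule below (a feature counts as recovered if the other set contains it or a sufficiently similar feature) is an illustrative assumption, not one of the measures proposed in the paper:

```python
# Sketch of an *adjusted* pairwise overlap between two selected feature
# sets: a feature in set_a is matched if set_b contains the feature itself
# or a highly similar one. (Illustrative only; the paper's measures are
# defined differently.)

def adjusted_overlap(set_a, set_b, similarity, threshold=0.9):
    """Fraction of features in set_a matched by an identical or
    highly similar feature in set_b."""
    if not set_a:
        return 1.0
    matched = sum(
        1 for f in set_a
        if any(
            f == g
            or similarity.get((f, g), similarity.get((g, f), 0.0)) >= threshold
            for g in set_b
        )
    )
    return matched / len(set_a)

# Features "x1" and "x2" are nearly identical copies of each other.
sim = {("x1", "x2"): 0.98}
plain = len({"x1", "x3"} & {"x2", "x3"}) / 2                  # unadjusted
adjusted = adjusted_overlap({"x1", "x3"}, {"x2", "x3"}, sim)  # adjusted
```

An unadjusted measure scores the two selections as only half-stable because "x1" and "x2" have different identifiers; treating near-identical features as exchangeable scores them as fully stable, which is the behaviour the abstract argues for.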


Transferred Discrepancy: Quantifying the Difference Between Representations

Feng, Yunzhen, Zhai, Runtian, He, Di, Wang, Liwei, Dong, Bin

arXiv.org Machine Learning

Understanding what information neural networks capture is an essential problem in deep learning, and studying whether different models capture similar features is an initial step to achieve this goal. Previous works sought to define metrics over the feature matrices to measure the difference between two models. However, different metrics sometimes lead to contradictory conclusions, and there has been no consensus on which metric is suitable to use in practice. In this work, we propose a novel metric that goes beyond previous approaches. Recall that one of the most practical scenarios of using the learned representations is to apply them to downstream tasks. We argue that we should design the metric based on a similar principle. For that, we introduce the transferred discrepancy (TD), a new metric that defines the difference between two representations based on their downstream-task performance. Through an asymptotic analysis, we show how TD correlates with downstream tasks and the necessity to define metrics in such a task-dependent fashion. In particular, we also show that under specific conditions, the TD metric is closely related to previous metrics. Our experiments show that TD can provide fine-grained information for varied downstream tasks, and for the models trained from different initializations, the learned features are not the same in terms of downstream-task predictions. We find that TD may also be used to evaluate the effectiveness of different training strategies. For example, we demonstrate that the models trained with proper data augmentations that improve the generalization capture more similar features in terms of TD, while those with data augmentations that hurt the generalization will not. This suggests a training strategy that leads to more robust representation also trains models that generalize better.
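The core idea of comparing representations through downstream predictions rather than through feature matrices can be sketched as follows. The nearest-centroid probes, toy data, and the disagreement rate below are simplifying assumptions for illustration; the actual TD metric is defined in the paper:

```python
# Sketch of the transferred-discrepancy idea: compare two representations
# via the predictions of downstream models trained on each of them, not
# via the feature matrices directly. (The "downstream models" here are
# trivial nearest-centroid probes; illustrative only.)

def nearest_centroid_probe(features, labels):
    """Fit a nearest-centroid classifier; return its predict function."""
    groups = {}
    for f, y in zip(features, labels):
        groups.setdefault(y, []).append(f)
    centroids = {y: [sum(col) / len(fs) for col in zip(*fs)]
                 for y, fs in groups.items()}
    def predict(x):
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(centroids, key=lambda y: dist(centroids[y]))
    return predict

# The same labelled samples seen through two different representations.
rep_a = [((0.0, 0.0), 0), ((0.1, 0.2), 0), ((1.0, 1.0), 1), ((0.9, 1.1), 1)]
rep_b = [((5.0,), 0), ((5.2,), 0), ((9.0,), 1), ((8.8,), 1)]
probe_a = nearest_centroid_probe([f for f, _ in rep_a], [y for _, y in rep_a])
probe_b = nearest_centroid_probe([f for f, _ in rep_b], [y for _, y in rep_b])

# Held-out samples, each viewed in both representations; the discrepancy
# is the fraction of samples on which the two probes disagree.
test_pairs = [((0.05, 0.1), (5.1,)), ((0.95, 1.0), (8.9,))]
td = sum(probe_a(xa) != probe_b(xb) for xa, xb in test_pairs) / len(test_pairs)
```

A discrepancy of zero means the two representations are interchangeable for this downstream task even though their feature matrices look nothing alike, which is the task-dependent notion of similarity the abstract advocates.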


Microsoft Teams adds 'Together mode' in massive update

PCWorld

Microsoft today said that it's shaking up online Teams video meetings with a new "Together mode" that places participants in a virtual auditorium. It's all part of a redesigned Teams experience that capitalizes on some of the promises Microsoft has been making for years. On Wednesday, Microsoft said the company has spent much of the last few months rethinking the way video meetings are conducted. About 60 percent of those Microsoft surveyed said they felt less connected to their colleagues due to the coronavirus, so Microsoft's new Teams update tries to make nonverbal communication a priority. Several features play into this aspect of the Teams revamp, including chat bubbles, an updated way to "raise your hand" during meetings, and even live reactions.


Chameleon: Learning Model Initializations Across Tasks With Different Schemas

Brinkmeyer, Lukas, Drumond, Rafael Rego, Scholz, Randolf, Grabocka, Josif, Schmidt-Thieme, Lars

arXiv.org Artificial Intelligence

Parametric models, and particularly neural networks, require weight initialization as a starting point for gradient-based optimization. In most current practices, this is accomplished by using some form of random initialization. Instead, recent work shows that a specific initial parameter set can be learned from a population of tasks, i.e., dataset and target variable for supervised learning tasks. Using this initial parameter set leads to faster convergence for new tasks (model-agnostic meta-learning). Currently, methods for learning model initializations are limited to a population of tasks sharing the same schema, i.e., the same number, order, type and semantics of predictor and target variables. In this paper, we address the problem of meta-learning parameter initialization across tasks with different schemas, i.e., when the number of predictors varies across tasks while the tasks still share some variables. We propose Chameleon, a model that learns to align different predictor schemas to a common representation, using permutations and masks of the predictors of the training tasks at hand. In experiments on real-life data sets, we show that Chameleon can successfully learn parameter initializations across tasks with different schemas, providing an average lift in accuracy of 26% over random initialization and of 5% over a state-of-the-art method for fixed-schema learning of model initializations. To the best of our knowledge, our paper is the first work on the problem of learning model initializations across tasks with different schemas.